Vector Quantization by Minimizing Kullback-Leibler Divergence

Authors

  • Lan Yang
  • Jingbin Wang
  • Yujin Tu
  • Prarthana Mahapatra
  • Nelson Cardoso
Abstract

This paper proposes a new method for vector quantization that minimizes the Kullback-Leibler divergence between the class label distributions over the quantization inputs (the original vectors) and over the quantization output (the subsets into which the vector set is partitioned). In this way, the quantization output retains as much class label information as possible. An objective function is constructed, and an iterative algorithm is developed to minimize it. The new method is evaluated on a bag-of-features based image classification problem.
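To make the idea concrete, below is a minimal, illustrative Python sketch of an alternating-minimization scheme in the spirit of the abstract: each input vector carries a smoothed one-hot class label distribution, each quantization cell keeps the empirical label distribution of its members, and vectors are reassigned to the cell whose label distribution is closest in KL divergence. The smoothing constant, the small distortion term, and all function names are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between discrete distributions."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def kl_vq(X, labels, n_codewords=8, n_classes=None, n_iters=20, seed=0):
    """Alternating-minimization sketch of label-preserving vector quantization.

    X      : (n, d) array of input vectors
    labels : (n,)  array of integer class labels
    Returns codewords, per-cell label distributions, and cell assignments.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    if n_classes is None:
        n_classes = int(labels.max()) + 1

    # Smoothed one-hot label distribution per input vector (assumption).
    P = np.full((n, n_classes), 0.01 / (n_classes - 1))
    P[np.arange(n), labels] = 0.99

    # Initialize codewords from random samples; cells start with uniform labels.
    codewords = X[rng.choice(n, size=n_codewords, replace=False)].astype(float)
    Q = np.full((n_codewords, n_classes), 1.0 / n_classes)

    assign = np.zeros(n, dtype=int)
    for _ in range(n_iters):
        # Assignment step: KL term plus a small distortion term (the weight
        # is an arbitrary choice here) that keeps cells geometrically compact.
        for i in range(n):
            cost = [kl(P[i], Q[k]) + 1e-3 * np.sum((X[i] - codewords[k]) ** 2)
                    for k in range(n_codewords)]
            assign[i] = int(np.argmin(cost))
        # Update step: the mean of the members' label distributions minimizes
        # the summed KL to the cell distribution; the centroid minimizes
        # the distortion term.
        for k in range(n_codewords):
            members = assign == k
            if members.any():
                Q[k] = P[members].mean(axis=0)
                codewords[k] = X[members].mean(axis=0)
    return codewords, Q, assign
```

With hard assignments, both steps decrease the summed cost, so the loop behaves like k-means with a KL-based distortion and converges to a local minimum.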


Similar Resources

Maximum Likelihood Topographic Map Formation

We introduce a new unsupervised learning algorithm for kernel-based topographic map formation of heteroscedastic Gaussian mixtures that allows for a unified account of distortion error (vector quantization), log-likelihood, and Kullback-Leibler divergence.


Distributed Vector Quantization Based on Kullback-Leibler Divergence

The goal of vector quantization is to use a few reproduction vectors to represent the original vectors/data while maintaining the necessary fidelity of the data. Distributed signal processing has received much attention in recent years, since in many applications data are collected and stored in distributed nodes over networks, but centralizing all these data at one processing center is som...


A Novel Vector Quantizer for Pattern Classification Tasks

We present a novel vector quantization method for pattern classification tasks. The input space is quantized into volume regions by code-vectors formed by the weights of neurons. During training, the volume regions are merged and split, depending upon the ambiguity in classification, measured using the Kullback-Leibler divergence. The heuristic followed is to split ambiguous regions, and merg...

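The entry above measures classification ambiguity with the KL divergence; the excerpt does not show the exact definition, but one plausible, hypothetical concretization is the KL divergence from the one-hot distribution of a region's majority class to the region's empirical label distribution, which reduces to the negative log-probability of the majority class:

```python
import numpy as np

def region_ambiguity(member_labels, n_classes, eps=1e-12):
    """Ambiguity of one quantization region (hypothetical definition).

    KL divergence from the one-hot distribution of the region's majority
    class to the region's empirical label distribution; this reduces to
    -log p(majority class). Pure regions score ~0, mixed regions score high.
    """
    counts = np.bincount(member_labels, minlength=n_classes).astype(float)
    p = counts / max(counts.sum(), 1.0)
    return -float(np.log(max(p.max(), eps)))

# A region mixing two classes is ambiguous and would be a split candidate:
print(region_ambiguity(np.array([0, 0, 1, 0, 1]), n_classes=3))  # ~0.51
print(region_ambiguity(np.array([2, 2, 2, 2]), n_classes=3))     # 0.0
```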

Divergence-based classification in learning vector quantization

We discuss the use of divergences in dissimilarity-based classification. Divergences can be employed whenever vectorial data consist of non-negative, potentially normalized features. This is, for instance, the case for spectral data or histograms. In particular, we introduce and study Divergence Based Learning Vector Quantization (DLVQ). We derive cost function based DLVQ schemes for the family...


Divergence based Learning Vector Quantization

We suggest the use of alternative distance measures for similarity-based classification in Learning Vector Quantization. Divergences can be employed whenever the data consist of non-negative, normalized features, which is the case for, e.g., spectral data or histograms. As examples, we derive gradient-based training algorithms in the framework of Generalized Learning Vector Quantization based o...

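The last two entries suggest a common recipe: plug a divergence into the GLVQ cost mu = (d+ - d-)/(d+ + d-) and update the closest correct and incorrect prototypes by gradient descent. The sketch below is one hypothetical instantiation with the KL divergence as dissimilarity; the function names, learning rate, and simplex re-projection are assumptions, not taken from either paper.

```python
import numpy as np

def kl(x, w, eps=1e-12):
    """KL divergence D(x || w) for non-negative normalized vectors."""
    x = np.clip(x, eps, None)
    w = np.clip(w, eps, None)
    return float(np.sum(x * np.log(x / w)))

def dlvq_step(x, y, prototypes, proto_labels, lr=0.05, eps=1e-12):
    """One GLVQ-style gradient step using KL divergence as dissimilarity.

    x            : normalized non-negative feature vector (e.g. a histogram)
    y            : class label of x
    prototypes   : (K, D) float array of prototype distributions (updated in place)
    proto_labels : (K,) array of prototype class labels
    """
    d = np.array([kl(x, w) for w in prototypes])
    same = proto_labels == y
    idx_p = np.where(same)[0][np.argmin(d[same])]    # closest correct prototype
    idx_m = np.where(~same)[0][np.argmin(d[~same])]  # closest incorrect prototype
    dp, dm = d[idx_p], d[idx_m]
    denom = (dp + dm) ** 2 + eps

    # For d(x, w) = sum_j x_j log(x_j / w_j), the gradient w.r.t. w is -x / w;
    # chain it through mu = (dp - dm) / (dp + dm) and descend: the correct
    # prototype is pulled toward x, the incorrect one pushed away.
    prototypes[idx_p] -= lr * (2.0 * dm / denom) * (-x / np.clip(prototypes[idx_p], eps, None))
    prototypes[idx_m] -= lr * (-2.0 * dp / denom) * (-x / np.clip(prototypes[idx_m], eps, None))

    # Re-project both updated prototypes onto the probability simplex.
    for j in (idx_p, idx_m):
        prototypes[j] = np.clip(prototypes[j], eps, None)
        prototypes[j] /= prototypes[j].sum()
    return prototypes
```

The simplex re-projection is one simple way to keep prototypes valid distributions so the KL divergence stays well defined across steps.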


Journal:
  • CoRR

Volume: abs/1501.07681

Publication date: 2015